Vision-Language Pre-training (VLP) models have shown remarkable performance on various downstream tasks. Their success heavily relies on the scale of pre-trained cross-modal datasets. However, the lack of large-scale datasets and benchmarks in Chinese hinders the development of Chinese VLP models and broader multilingual applications. In this work, we release a large-scale Chinese cross-modal dataset named Wukong, containing 100 million Chinese image-text pairs collected from the web. Wukong aims to benchmark different multi-modal pre-training methods to facilitate VLP research and community development. Furthermore, we release a group of models pre-trained with various image encoders (ViT-B/ViT-L/SwinT), and also apply advanced pre-training techniques to VLP, such as locked-image text tuning, token-wise similarity learning, and reduced-token interaction. Extensive experiments and benchmarking on different downstream tasks are also provided, including a new largest human-verified image-text test dataset. Experiments show that Wukong can serve as a promising Chinese pre-training dataset and benchmark for different cross-modal learning methods. For the zero-shot image classification task on 10 datasets, $Wukong_{ViT-L}$ achieves an average accuracy of 73.03%. For the image-text retrieval task, it achieves a mean recall of 71.6% on AIC-ICC, which is 12.9% higher than WenLan 2.0. Our Wukong models are also benchmarked against other variants on multiple downstream datasets, e.g., Flickr8K-CN, Flickr-30K-CN, COCO-CN, etc. More information can be found at https://wukong-dataset.github.io/wukong-dataset/.
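As a concrete illustration of locked-image text tuning, the sketch below freezes a pre-trained image tower and trains only the text tower with a symmetric contrastive loss. The encoder interfaces and the temperature are illustrative assumptions, not the released Wukong training code.

```python
# Minimal sketch of locked-image text tuning (LiT) for contrastive VLP.
# `image_encoder` and `text_encoder` are hypothetical callables returning
# (batch, dim) embeddings; details differ in the released Wukong models.
import torch
import torch.nn.functional as F

def lit_contrastive_loss(image_encoder, text_encoder, images, texts, temperature=0.07):
    with torch.no_grad():                      # image tower is "locked" (frozen)
        img_emb = image_encoder(images)
    txt_emb = text_encoder(texts)              # only the text tower receives gradients
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature   # pairwise cosine similarities
    labels = torch.arange(len(images), device=logits.device)
    # symmetric InfoNCE: match each image to its caption and vice versa
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
```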
Cross-domain graph anomaly detection (CD-GAD) describes the problem of detecting anomalous nodes in an unlabelled target graph using auxiliary, related source graphs with labelled anomalous and normal nodes. Although it presents a promising approach to address the notoriously high false positive issue in anomaly detection, little work has been done in this line of research. There are numerous domain adaptation methods in the literature, but it is difficult to adapt them for GAD due to the unknown distributions of the anomalies and the complex node relations embedded in graph data. To this end, we introduce a novel domain adaptation approach, namely Anomaly-aware Contrastive alignmenT (ACT), for GAD. ACT is designed to jointly optimise: (i) unsupervised contrastive learning of normal representations of nodes in the target graph, and (ii) anomaly-aware one-class alignment that aligns these contrastive node representations and the representations of labelled normal nodes in the source graph, while enforcing significant deviation of the representations of the normal nodes from the labelled anomalous nodes in the source graph. In doing so, ACT effectively transfers anomaly-informed knowledge from the source graph to learn the complex node relations of the normal class for GAD on the target graph without any specification of the anomaly distributions. Extensive experiments on eight CD-GAD settings demonstrate that our approach ACT achieves substantially improved detection performance over 10 state-of-the-art GAD methods. Code is available at https://github.com/QZ-WANG/ACT.
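The following is a minimal sketch of what the anomaly-aware one-class alignment term could look like: pull target-graph representations towards labelled source normals and push them away from labelled source anomalies. The function name, distance choice, and margin are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of an ACT-style anomaly-aware one-class alignment objective.
import torch
import torch.nn.functional as F

def one_class_alignment_loss(target_emb, src_normal_emb, src_anom_emb, margin=1.0):
    """Align target-graph node embeddings with labelled source normals while
    enforcing at least `margin` deviation from labelled source anomalies."""
    t = F.normalize(target_emb, dim=-1)
    n = F.normalize(src_normal_emb, dim=-1)
    a = F.normalize(src_anom_emb, dim=-1)
    align = torch.cdist(t, n).mean()                     # alignment with the normal class
    deviate = F.relu(margin - torch.cdist(t, a)).mean()  # deviation from anomalies
    return align + deviate
```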
Unsupervised time series anomaly detection is instrumental in monitoring and finding potential failures of target systems in various domains. Current state-of-the-art time series anomaly detectors mainly focus on devising advanced neural network structures and new reconstruction/prediction learning objectives to learn data normality (normal patterns and behaviours) as accurately as possible. However, these one-class learning methods can be deceived by unknown anomalies in the training data (i.e., anomaly contamination). Moreover, their normality learning lacks knowledge about the anomalies of interest. Consequently, they often learn a biased, inaccurate normality boundary. This paper proposes a novel one-class learning approach, named calibrated one-class classification, to tackle this problem. Our one-class classifier is calibrated in two ways: (1) by adaptively penalising uncertain predictions, which helps eliminate the impact of anomaly contamination while emphasising the predictions that the one-class model is confident in, and (2) by discriminating normal samples from native anomaly examples, which are synthesised based on the original data to simulate genuine time series anomaly behaviours. These two calibrations result in contamination-tolerant, anomaly-informed one-class learning, yielding significantly improved normality modelling. Extensive experiments on six real-world datasets show that our model substantially outperforms 12 state-of-the-art competitors, achieving a 6%-31% F1 score improvement. The source code is available at https://github.com/xuhongzuo/couta.
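A minimal sketch of how the two calibrations could combine in an SVDD-style one-class loss is given below; the term forms, the margin value, and the uncertainty weighting are illustrative assumptions, and the actual losses in the paper and repository may differ.

```python
# Illustrative calibrated one-class objective in the spirit of COUTA.
import torch
import torch.nn.functional as F

def calibrated_one_class_loss(emb, emb_native_anom, center, uncertainty):
    """emb: embeddings of training windows; emb_native_anom: embeddings of
    synthesised ("native") anomalies; uncertainty: per-sample predictive variance."""
    dist = ((emb - center) ** 2).sum(dim=-1)
    # (1) uncertainty calibration: down-weight uncertain samples that may be
    # contaminated anomalies, emphasise confident normal predictions
    svdd = (dist / uncertainty + torch.log(uncertainty)).mean()
    # (2) native-anomaly calibration: push simulated anomalies outside the
    # normality boundary by a margin
    anom_dist = ((emb_native_anom - center) ** 2).sum(dim=-1)
    margin = F.relu(1.0 - anom_dist).mean()
    return svdd + margin
```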
The isolation forest (iForest) has become arguably the most popular anomaly detector in recent years. It iteratively performs axis-parallel data space partitioning in a tree structure to isolate deviated data objects from the other data, with the isolation difficulty of an object defined as its anomaly score. iForest shows effective performance on popular dataset benchmarks, but its axis-parallel linear data partitioning is ineffective in handling hard anomalies in high-dimensional/non-linear data spaces, and even worse, it leads to a notorious algorithmic bias that assigns unexpectedly large anomaly scores to artefact regions. There are several extensions of iForest, but they still focus on linear data partitioning and fail to effectively isolate these hard anomalies. This paper introduces a novel extension of iForest, deep isolation forest. Our method offers a comprehensive isolation approach that can arbitrarily partition the data on subspaces of any size, effectively avoiding the algorithmic bias of linear partitioning. Further, it requires only randomly initialised neural networks (i.e., no optimisation is needed in our method) to ensure the freedom of the partitions. In doing so, the desired randomness and diversity in both the network-based random representations and the random-partition-based isolation can be fully leveraged to significantly enhance isolation-ensemble-based anomaly detection. Our approach also offers a data-type-agnostic anomaly detection solution: anomalies in different types of data can be detected by simply plugging in corresponding randomly initialised neural networks as the feature mapping. Extensive empirical results on a large collection of real-world datasets show that our model achieves substantial improvement over state-of-the-art isolation-based and non-isolation-based anomaly detection models.
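The core mechanism is simple enough to sketch: map the data through an ensemble of randomly initialised, never-trained networks, then isolate in each representation space. The sketch below uses scikit-learn's IsolationForest as the isolation back-end; the architecture and hyperparameters are illustrative, not the authors' configuration.

```python
# Minimal sketch of the deep isolation forest idea.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import IsolationForest

def deep_isolation_forest_scores(X, n_members=5, hidden=64, out_dim=16, seed=0):
    torch.manual_seed(seed)
    X_t = torch.as_tensor(X, dtype=torch.float32)
    scores = np.zeros(len(X))
    for _ in range(n_members):
        # random, untrained non-linear feature map (no optimisation needed)
        net = nn.Sequential(nn.Linear(X_t.shape[1], hidden), nn.Tanh(),
                            nn.Linear(hidden, out_dim))
        with torch.no_grad():
            Z = net(X_t).numpy()
        forest = IsolationForest(n_estimators=100).fit(Z)
        scores += -forest.score_samples(Z)   # higher = more anomalous
    return scores / n_members
```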
Graph-level anomaly detection (GAD) describes the problem of detecting graphs that are abnormal in their structure and/or the features of their nodes, as compared to other graphs. One of the challenges in GAD is to devise graph representations that enable the detection of both locally-anomalous and globally-anomalous graphs, i.e., graphs that are abnormal in their fine-grained (node-level) or holistic (graph-level) properties, respectively. To tackle this challenge, we introduce a novel deep anomaly detection approach for GAD that learns rich global and local normal pattern information by joint random distillation of graph and node representations. The random distillation is achieved by training one GNN to predict another GNN with randomly initialised network weights. Extensive experiments on 16 real-world graph datasets from diverse domains show that our model significantly outperforms seven state-of-the-art models. Code and datasets are available at https://git.io/GLocalKD.
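A hedged sketch of joint random distillation follows: a trainable student GNN learns to reproduce the node- and graph-level outputs of a frozen, randomly initialised teacher, and the prediction error flags anomalous graphs. The bare-bones GCN here is a stand-in, not the authors' architecture.

```python
# Sketch of joint random distillation for graph-level anomaly detection.
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    def __init__(self, in_dim, hid_dim=32):
        super().__init__()
        self.w1, self.w2 = nn.Linear(in_dim, hid_dim), nn.Linear(hid_dim, hid_dim)

    def forward(self, A, X):               # A: normalised adjacency, X: node features
        h = torch.relu(self.w1(A @ X))
        h = self.w2(A @ h)                 # node-level representations
        return h, h.mean(dim=0)            # (node reps, mean-pooled graph rep)

def distillation_loss(student, teacher, A, X):
    with torch.no_grad():                  # teacher weights stay random and frozen
        t_node, t_graph = teacher(A, X)
    s_node, s_graph = student(A, X)
    # joint node-level (local) and graph-level (global) distillation errors;
    # at test time, this same quantity serves as the graph's anomaly score
    return ((s_node - t_node) ** 2).mean() + ((s_graph - t_graph) ** 2).mean()
```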
State-of-the-art (SOTA) anomaly segmentation approaches on complex urban driving scenes explore pixel-wise classification uncertainty learned from outlier exposure, or external reconstruction models. However, previous uncertainty approaches that directly associate high uncertainty with anomaly can sometimes lead to incorrect anomaly predictions, and external reconstruction models tend to be too inefficient for real-time self-driving embedded systems. In this paper, we propose a new anomaly segmentation method, named pixel-wise energy-biased abstention learning (PEBAL), that explores pixel-wise abstention learning (AL) with a model that learns an adaptive pixel-level anomaly class, together with an energy-based model (EBM) that learns the inlier pixel distribution. More specifically, PEBAL is based on a non-trivial joint training of the EBM and AL, where the EBM is trained to output high energy for anomaly pixels (from outlier exposure), and the AL is trained such that these high-energy pixels receive an adaptive low penalty for being included in the anomaly class. We extensively evaluate PEBAL against the SOTA and show that it achieves the best performance across four benchmarks. Code is available at https://github.com/tianyu0207/pebal.
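For intuition, the standard free-energy score used in EBM-style anomaly segmentation can be computed as below: confident inlier pixels get low energy, outliers get high energy. PEBAL's joint training with abstention learning is omitted here for brevity; this shows only the energy map itself.

```python
# Free-energy anomaly score over per-pixel inlier-class logits.
import torch

def pixel_free_energy(logits, temperature=1.0):
    """logits: (B, C, H, W) inlier-class logits from a segmentation head.
    Returns a (B, H, W) energy map; higher energy suggests an anomalous pixel."""
    return -temperature * torch.logsumexp(logits / temperature, dim=1)

# usage (hypothetical model): anomaly_map = pixel_free_energy(seg_model(images))
```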
Unsupervised anomaly detection (UAD), which requires only normal (healthy) training images, is an important tool for enabling medical image analysis (MIA) applications such as disease screening, because it is often difficult to collect and annotate abnormal (or disease) images in MIA. However, relying heavily on normal images may cause the model training to overfit the normal class. Self-supervised pre-training is an effective solution to this problem. Unfortunately, current self-supervised methods adapted from computer vision are sub-optimal for MIA applications because they do not explore MIA domain knowledge in the design of their pretext tasks or training processes. In this paper, we propose a new self-supervised pre-training method for UAD designed for MIA applications, named multi-level strong augmentation via contrastive learning (MSACL). MSACL is based on a novel optimisation that contrasts normal and multiple classes of synthesised abnormal images, with each class enforced to form a tight and dense cluster in terms of Euclidean distance and cosine similarity, where the abnormal images are formed by simulating lesions of varying number, size, and appearance in the normal images. In our experiments, we show that MSACL pre-training improves the accuracy of SOTA UAD methods on colonoscopy, fundus screening, and Covid-19 chest X-ray datasets.
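The lesion-simulation step can be sketched as a strong augmentation that pastes random patches into normal images; the resulting pseudo-abnormal batches then act as extra classes in a contrastive pre-training objective. This is a simplified stand-in: the actual lesion synthesis and the multi-centred clustering loss in MSACL are more elaborate.

```python
# Hedged sketch: synthesise pseudo-abnormal images from normal ones.
import torch

def simulate_lesions(images, max_lesions=3, size=16):
    """Paste a random number of random-intensity square patches into normal
    images to mimic lesions of varying number, size, and appearance."""
    out = images.clone()
    B, _, H, W = images.shape
    for b in range(B):
        for _ in range(torch.randint(1, max_lesions + 1, (1,)).item()):
            s = torch.randint(size // 2, size, (1,)).item()
            y = torch.randint(0, H - s, (1,)).item()
            x = torch.randint(0, W - s, (1,)).item()
            out[b, :, y:y + s, x:x + s] = torch.rand(1).item()  # synthetic lesion
    return out

# usage: treat `images` and several differently-augmented `simulate_lesions(images)`
# batches as distinct classes in a supervised-contrastive pre-training objective.
```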
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do the work. We first discuss how AI can be used to enhance the result of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embedding learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
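To make the mechanism concrete, the sketch below encodes organ/tumor prompts with the OpenAI CLIP library and uses the text embeddings to condition per-class mask prediction. The head design, prompt wording, and projection are illustrative assumptions; they show the conditioning idea rather than the authors' exact architecture.

```python
# Sketch of CLIP-text-embedding-driven segmentation conditioning.
import torch
import torch.nn as nn
import clip  # OpenAI CLIP (pip install git+https://github.com/openai/CLIP.git)

clip_model, _ = clip.load("ViT-B/32")
prompts = ["a computerized tomography of a liver",
           "a computerized tomography of a liver tumor"]
with torch.no_grad():
    class_emb = clip_model.encode_text(clip.tokenize(prompts)).float()  # (K, 512)

class CLIPDrivenHead(nn.Module):
    def __init__(self, feat_dim, emb_dim=512):
        super().__init__()
        self.proj = nn.Linear(emb_dim, feat_dim)   # text embedding -> per-class filter

    def forward(self, voxel_feats, class_emb):
        # voxel_feats: (B, C, D, H, W); one binary mask logit map per text prompt
        filters = self.proj(class_emb)             # (K, C)
        return torch.einsum("bcdhw,kc->bkdhw", voxel_feats, filters)
```

Because classes are addressed by text prompts rather than a fixed label index, adding a new class amounts to encoding a new prompt, which is what makes extension without catastrophic forgetting plausible.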
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative, whose goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful tool for aiding image understanding that has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose a non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
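The multi-task formulation can be sketched as a single loss that sums, at every pyramid level, a siamese feature-comparison term and a pixel-restoration term. The loss forms and pooling below are stand-ins under that assumption, not the PCRLv2 implementation.

```python
# Hedged sketch of a PCRLv2-style multi-task objective on a feature pyramid.
import torch
import torch.nn.functional as F

def pcrl_style_loss(pyramid_a, pyramid_b, restored_pyramid, target_pyramid):
    """pyramid_a/b: lists of siamese feature maps from two augmented views;
    restored_pyramid/target_pyramid: multi-scale pixel reconstructions/targets."""
    loss = 0.0
    for fa, fb, rec, tgt in zip(pyramid_a, pyramid_b, restored_pyramid, target_pyramid):
        # siamese comparison: negative cosine similarity between pooled features
        za = F.normalize(fa.flatten(2).mean(-1), dim=-1)
        zb = F.normalize(fb.flatten(2).mean(-1), dim=-1)
        loss = loss - (za * zb).sum(-1).mean()
        # multi-scale pixel restoration: L1 reconstruction at this pyramid level
        loss = loss + F.l1_loss(rec, tgt)
    return loss
```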